
    $K$ Users Caching Two Files: An Improved Achievable Rate

    Caching is an approach to smoothing out the variability of traffic over time. It has recently been proved that the local memories at the users can be exploited to reduce the peak traffic far more efficiently than previously believed. In this work we improve upon the existing results and introduce a novel caching strategy that takes advantage of simultaneous coded placement and coded delivery in order to decrease the worst-case achievable rate with $2$ files and $K$ users. We show that our scheme outperforms the state of the art for any cache size $\frac{1}{K} < M < 1$.
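
    For orientation, a minimal sketch (my own illustration, not the paper's improved scheme) of the baseline this line of work builds on: the worst-case delivery rate of the Maddah-Ali--Niesen coded caching scheme for $N = 2$ files and $K$ users, evaluated at its corner points $M = tN/K$. The regime $\frac{1}{K} < M < 1$ targeted above is the small-memory end of this curve; the function name is mine.

```python
# Worst-case delivery rate of the Maddah-Ali--Niesen coded caching
# scheme, K(1 - M/N) / (1 + K*M/N), valid at corner points M = t*N/K.
def man_rate(K: int, N: int, M: float) -> float:
    return K * (1 - M / N) / (1 + K * M / N)

K, N = 10, 2
for t in range(K + 1):
    M = N * t / K
    print(f"M = {M:.1f}  rate = {man_rate(K, N, M):.3f}")
```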

    Reliable Physical Layer Network Coding

    When two or more users in a wireless network transmit simultaneously, their electromagnetic signals are linearly superimposed on the channel. As a result, a receiver that is interested in one of these signals sees the others as unwanted interference. This property of the wireless medium is typically viewed as a hindrance to reliable communication over a network. However, using a recently developed coding strategy, interference can in fact be harnessed for network coding. In a wired network, (linear) network coding refers to each intermediate node taking its received packets, computing a linear combination over a finite field, and forwarding the outcome towards the destinations. Then, given an appropriate set of linear combinations, a destination can solve for its desired packets. For certain topologies, this strategy can attain significantly higher throughputs than routing-based strategies. Reliable physical layer network coding takes this idea one step further: using judiciously chosen linear error-correcting codes, intermediate nodes in a wireless network can directly recover linear combinations of the packets from the observed noisy superpositions of transmitted signals. Starting with some simple examples, this survey explores the core ideas behind this new technique and the possibilities it offers for communication over interference-limited wireless networks. Comment: 19 pages, 14 figures; survey paper to appear in Proceedings of the IEEE.
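
    The wired network-coding step described above can be made concrete with a toy over GF(2), where a linear combination is simply a XOR, as on the classic butterfly network; the packet values below are arbitrary. This is only the wired analogy: the physical-layer version recovers such combinations from noisy signal superpositions rather than from explicit packets.

```python
# Linear network coding over GF(2): a relay forwards the XOR of its
# two received packets instead of forwarding them separately.
a = bytes([0x13, 0x37])                      # packet from source 1
b = bytes([0xBE, 0xEF])                      # packet from source 2
relay = bytes(x ^ y for x, y in zip(a, b))   # relay broadcasts a XOR b

# A destination that already holds `a` solves the combination for b:
recovered = bytes(x ^ y for x, y in zip(a, relay))
assert recovered == b
```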

    Increasing Availability in Distributed Storage Systems via Clustering

    We introduce the Fixed Cluster Repair System (FCRS) as a novel architecture for Distributed Storage Systems (DSS), achieving a small repair bandwidth while guaranteeing a high availability. Specifically, we partition the set of servers in a DSS into $s$ clusters and allow a failed server to choose any cluster other than its own as its repair group. Thereby, we guarantee an availability of $s-1$. We characterize the repair bandwidth vs. storage trade-off for the FCRS under functional repair and show that the minimum repair bandwidth can be improved by an asymptotic multiplicative factor of $2/3$ compared to the state-of-the-art coding techniques that guarantee the same availability. We further introduce Cubic Codes, designed to minimize the repair bandwidth of the FCRS under the exact repair model. We prove an asymptotic multiplicative improvement of $0.79$ in the minimum repair bandwidth compared to the existing exact repair coding techniques that achieve the same availability. We show that Cubic Codes are information-theoretically optimal for the FCRS with $2$ and $3$ complete clusters. Furthermore, under the repair-by-transfer model, Cubic Codes are optimal irrespective of the number of clusters.
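
    A minimal sketch of the cluster/repair-group structure just described (the round-robin assignment of servers to clusters and the function names are my assumptions, purely for illustration): with $s$ clusters, a failed server may choose any of the $s-1$ clusters other than its own as its repair group.

```python
# FCRS-style layout sketch: partition servers into s clusters and list
# the s - 1 candidate repair groups of a failed server.
def clusters(n: int, s: int):
    """Partition servers 0..n-1 into s clusters (round-robin, my choice)."""
    return [list(range(i, n, s)) for i in range(s)]

def repair_groups(server: int, n: int, s: int):
    own = server % s                      # cluster containing the failed server
    return [c for i, c in enumerate(clusters(n, s)) if i != own]

# Server 4 (cluster 1 of 3) has two candidate repair groups:
print(repair_groups(server=4, n=12, s=3))
```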

    Single-Server Multi-Message Private Information Retrieval with Side Information

    We study the problem of single-server multi-message private information retrieval with side information. One user wants to recover $N$ out of $K$ independent messages which are stored at a single server. The user initially possesses a subset of $M$ messages as side information. The goal of the user is to download the $N$ demand messages while not leaking any information about the indices of these messages to the server. In this paper, we characterize the minimum number of required transmissions. We also present the optimal linear coding scheme which enables the user to download the demand messages and preserves the privacy of their indices. Moreover, we show that the trivial MDS coding scheme with $K-M$ transmissions is optimal if $N > M$ or $N^2 + N \ge K - M$. This means that if one wishes to privately download more than the square root of the number of files in the database, then one must effectively download the full database (minus the side information), irrespective of the amount of side information one has available. Comment: 12 pages, submitted to the 56th Allerton conference.
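
    The optimality condition above is easy to check directly; the following snippet (function name mine) evaluates it for two example parameter sets.

```python
# The trivial MDS scheme with K - M transmissions is optimal when
# N > M or N^2 + N >= K - M, per the abstract above.
def trivial_mds_optimal(K: int, M: int, N: int) -> bool:
    return N > M or N * N + N >= K - M

print(trivial_mds_optimal(K=100, M=5, N=4))    # False: 4 <= 5 and 20 < 95
print(trivial_mds_optimal(K=100, M=5, N=10))   # True: N > M
```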

    Compute-and-Forward: Harnessing Interference through Structured Codes

    Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information. Comment: IEEE Trans. Info. Theory, to appear. 23 pages, 13 figures.
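
    A toy end-to-end check of the message flow, with the lattice encoding and decoding abstracted away and only the finite-field bookkeeping kept (the prime and the coefficient vectors are arbitrary choices of mine): two relays each decode one integer combination over GF($p$), and the destination inverts the resulting $2 \times 2$ system.

```python
# Compute-and-forward at the message level: relays forward integer
# combinations of the messages; the destination solves the system.
p = 257
w1, w2 = 42, 201                       # source messages in GF(p)
A = [[1, 1], [1, 2]]                   # coefficient vectors decoded at the relays
eq1 = (A[0][0] * w1 + A[0][1] * w2) % p
eq2 = (A[1][0] * w1 + A[1][1] * w2) % p

det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p
det_inv = pow(det, p - 2, p)           # field inverse via Fermat's little theorem
w1_hat = ((A[1][1] * eq1 - A[0][1] * eq2) * det_inv) % p
w2_hat = ((A[0][0] * eq2 - A[1][0] * eq1) * det_inv) % p
assert (w1_hat, w2_hat) == (w1, w2)
```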

    Cooperative Data Exchange based on MDS Codes

    The cooperative data exchange problem is studied for the fully connected network. In this problem, each node initially possesses only a subset of the $K$ packets making up the file. Nodes make broadcast transmissions that are received by all other nodes. The goal is for each node to recover the full file. In this paper, we present a polynomial-time deterministic algorithm to compute the optimal (i.e., minimal) number of required broadcast transmissions and to determine the precise transmissions to be made by the nodes. A particular feature of our approach is that each of the $K-d$ transmissions is a linear combination of exactly $d+1$ packets, and we show how to optimally choose the value of $d$. We also show how the coefficients of these linear combinations can be chosen by leveraging a connection to Maximum Distance Separable (MDS) codes. Moreover, we show that our method can be used to solve cooperative data exchange problems with weighted cost as well as the so-called successive local omniscience problem. Comment: 21 pages, 1 figure.
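
    A sketch of the transmission structure described above (which $d+1$ packets each broadcast covers, and the use of Vandermonde rows for the coefficients, are my illustrative assumptions; Vandermonde matrices are one classical way to obtain the MDS property): each of the $K-d$ broadcasts combines exactly $d+1$ packets over GF($p$).

```python
# Illustrative transmission plan: K - d broadcasts, each mixing d + 1
# packets with Vandermonde coefficients over GF(p).
p, K, d = 13, 6, 2
num_tx = K - d
supports = [list(range(i, i + d + 1)) for i in range(num_tx)]          # d+1 packets each
coeffs = [[pow(a, j, p) for j in range(d + 1)] for a in range(1, num_tx + 1)]
for a, (sup, row) in enumerate(zip(supports, coeffs), start=1):
    print(f"tx {a}: packets {sup}, coefficients {row}")
```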

    Compute-and-Forward: Finding the Best Equation

    Compute-and-Forward is an emerging technique to deal with interference. It allows the receiver to decode a suitably chosen integer linear combination of the transmitted messages. The integer coefficients should be adapted to the channel fading state. Optimizing these coefficients is a Shortest Lattice Vector (SLV) problem. In general, the SLV problem is known to be prohibitively complex. In this paper, we show that the particular SLV instance resulting from the Compute-and-Forward problem can be solved in low polynomial complexity, and we give an explicit deterministic algorithm that is guaranteed to find the optimal solution. Comment: Paper presented at the 52nd Allerton Conference, October 2014.
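
    For comparison, a brute-force baseline for the same coefficient search (the paper's contribution is an exact algorithm with low polynomial complexity; exhaustive search over small integer vectors is exponential in the dimension and shown only for illustration), using the standard compute-and-forward computation rate as the objective.

```python
# Exhaustive search for the coefficient vector maximizing the
# computation rate R(h, a) = max(0, 0.5*log2(1/q(a))), where
# q(a) = ||a||^2 - P (h.a)^2 / (1 + P ||h||^2).
import itertools
import math

def best_equation(h, P, amax=4):
    hh = sum(x * x for x in h)
    best_a, best_q = None, math.inf
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        if not any(a):
            continue                      # skip the all-zero vector
        ha = sum(x * y for x, y in zip(h, a))
        q = sum(x * x for x in a) - P * ha * ha / (1 + P * hh)
        if q < best_q:                    # smaller quadratic form = higher rate
            best_a, best_q = a, q
    rate = max(0.0, 0.5 * math.log2(1.0 / best_q))
    return best_a, rate

print(best_equation(h=[1.2, 0.7], P=10.0))
```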

    The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in Compressed Sensing

    Recovery of the sparsity pattern (or support) of an unknown sparse vector from a limited number of noisy linear measurements is an important problem in compressed sensing. In the high-dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the measurement rate and the per-sample signal-to-noise ratio (SNR) are finite constants, independent of the vector length. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases computationally simple estimators are near-optimal. Bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for several different recovery algorithms. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing information-theoretic necessary bounds. Near-optimality is shown for a wide variety of practically motivated signal models.
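
    A toy experiment in the spirit of the computationally simple estimators mentioned above (all parameter choices are mine): a matched filter keeps the $k$ largest correlations $|A^\top y|$ as the estimated support, and the number of missed support entries is reported.

```python
# Support recovery via thresholded correlations on a random Gaussian
# measurement matrix with additive noise at a fixed per-sample SNR.
import numpy as np

rng = np.random.default_rng(0)
n, m, k, snr = 1000, 300, 50, 5.0   # dimension, measurements, sparsity, SNR
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
signal = A @ x
noise_std = np.sqrt(np.mean(signal ** 2) / snr)
y = signal + noise_std * rng.standard_normal(m)

est = np.argsort(np.abs(A.T @ y))[-k:]          # k largest correlations
missed = len(set(support) - set(est.tolist()))
print(f"missed {missed} of {k} support entries")
```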